40 research outputs found

    Rational Krylov for Stieltjes matrix functions: convergence and pole selection

    Evaluating the action of a matrix function on a vector, that is, $x=f(\mathcal M)v$, is a ubiquitous task in applications. When $\mathcal M$ is large, one usually relies on Krylov projection methods. In this paper, we provide effective choices for the poles of the rational Krylov method for approximating $x$ when $f(z)$ is either Cauchy-Stieltjes or Laplace-Stieltjes (or, equivalently, completely monotonic) and $\mathcal M$ is a positive definite matrix. Relying on the same tools used to analyze the generic situation, we then focus on the case $\mathcal M=I \otimes A - B^T \otimes I$, with $v$ obtained by vectorizing a low-rank matrix; this finds application, for instance, in solving fractional diffusion equations on two-dimensional tensor grids. We show how to leverage tensorized Krylov subspaces to exploit the Kronecker structure, and we introduce an error analysis for the numerical approximation of $x$. Pole selection strategies with explicit convergence bounds are given also in this case.
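
    A minimal sketch of the generic setting, not the paper's pole-selection strategy: a single-pole (shift-and-invert) rational Krylov approximation of $f(\mathcal M)v$ for the Cauchy-Stieltjes function $f(z)=z^{-1/2}$ on a small SPD matrix, with a dense spectral reference. The pole value, sizes, and test data are illustrative assumptions.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        # Single-pole (shift-and-invert) rational Krylov sketch for f(M)v with
        # f(z) = z^{-1/2} and M symmetric positive definite. The pole sigma is an
        # arbitrary illustrative choice, not one of the optimized poles of the paper.

        def shift_invert_krylov(M, v, sigma, k):
            n = M.shape[0]
            lu = lu_factor(M + sigma * np.eye(n))
            V = np.zeros((n, k + 1))
            V[:, 0] = v / np.linalg.norm(v)
            for j in range(k):
                w = lu_solve(lu, V[:, j])            # apply (M + sigma*I)^{-1}
                for i in range(j + 1):               # modified Gram-Schmidt
                    w -= (V[:, i] @ w) * V[:, i]
                V[:, j + 1] = w / np.linalg.norm(w)
            Vk = V[:, :k]
            H = Vk.T @ M @ Vk                        # project M, not its inverse
            lam, U = np.linalg.eigh((H + H.T) / 2)
            fH_e1 = U @ (lam ** -0.5 * U[0, :])      # f(H) e_1 via spectral decomposition
            return np.linalg.norm(v) * (Vk @ fH_e1)

        rng = np.random.default_rng(0)
        Q = np.linalg.qr(rng.standard_normal((200, 200)))[0]
        M = Q @ np.diag(np.linspace(1.0, 1e3, 200)) @ Q.T    # SPD test matrix
        v = rng.standard_normal(200)

        lam, U = np.linalg.eigh(M)
        exact = U @ ((U.T @ v) / np.sqrt(lam))               # dense reference M^{-1/2} v
        approx = shift_invert_krylov(M, v, sigma=10.0, k=30)
        print(np.linalg.norm(exact - approx) / np.linalg.norm(exact))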

    A low-rank technique for computing the quasi-stationary distribution of subcritical Galton-Watson processes

    We present a new algorithm for computing the quasi-stationary distribution of subcritical Galton-Watson branching processes. This algorithm is based on a particular discretization of a well-known functional equation that characterizes the quasi-stationary distribution of these processes. We provide a theoretical analysis of the approximate low-rank structure that stems from this discretization, and we extend the procedure to multitype branching processes. We use numerical examples to demonstrate that our algorithm is both more accurate and more efficient than other approaches.
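
    Not the paper's low-rank functional-equation solver, but a brute-force reference for the same object: a Monte Carlo estimate of the quasi-stationary (Yaglom) distribution obtained by conditioning simulated paths on survival at a fixed horizon. The Poisson(0.8) offspring law, horizon, and sample size are illustrative assumptions.

        import numpy as np

        # Monte Carlo baseline for the quasi-stationary (Yaglom) distribution of a
        # subcritical Galton-Watson process: simulate many paths and condition on
        # survival at a fixed horizon. Offspring law Poisson(0.8) is illustrative.

        rng = np.random.default_rng(0)
        mean_offspring, horizon, n_paths = 0.8, 20, 200_000

        survivors = []
        for _ in range(n_paths):
            z = 1
            for _ in range(horizon):
                if z == 0:
                    break
                z = rng.poisson(mean_offspring, size=z).sum()
            if z > 0:
                survivors.append(z)

        survivors = np.array(survivors)
        sizes, counts = np.unique(survivors, return_counts=True)
        print("estimated P(survival up to horizon):", len(survivors) / n_paths)
        for s, c in zip(sizes[:10], counts[:10]):
            print(f"  quasi-stationary P(Z = {s}) ~ {c / len(survivors):.4f}")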

    Solving rank structured Sylvester and Lyapunov equations

    We consider the problem of efficiently solving Sylvester and Lyapunov equations of medium and large scale, in the case of rank-structured data, i.e., when the coefficient matrices and the right-hand side have low-rank off-diagonal blocks. This comprises problems with banded data, recently studied by Haber and Verhaegen in "Sparse solution of the Lyapunov equation for large-scale interconnected systems", Automatica, 2016, and by Palitta and Simoncini in "Numerical methods for large-scale Lyapunov equations with symmetric banded data", SISC, 2018, which often arise in the discretization of elliptic PDEs. We show that, under suitable assumptions, the quasiseparable structure is guaranteed to be numerically present in the solution, and explicit novel estimates of the numerical rank of the off-diagonal blocks are provided. Efficient solution schemes that rely on the technology of hierarchical matrices are described, and several numerical experiments confirm the applicability and efficiency of the approaches. We develop a MATLAB toolbox that allows easy replication of the experiments and a ready-to-use interface for the solvers. The performance of the different approaches is compared, and we show that the new methods are efficient on several classes of relevant problems.
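
    A small dense check of the structural claim (not the hierarchical-matrix solvers of the paper): solve a Sylvester equation with banded coefficients and a banded right-hand side using SciPy, then inspect the singular values of an off-diagonal block of the solution. Sizes, bandwidths, and the random data are illustrative assumptions.

        import numpy as np
        from scipy.linalg import solve_sylvester

        # With banded coefficients and a banded right-hand side, the solution of
        # AX + XB = C has off-diagonal blocks with rapidly decaying singular
        # values, i.e. it is numerically quasiseparable.

        rng = np.random.default_rng(0)
        n = 400
        A = np.diag(3.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
        B = A.copy()                                      # well-conditioned banded coefficients
        d, e = rng.standard_normal(n), rng.standard_normal(n - 1)
        C = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)   # banded right-hand side

        X = solve_sylvester(A, B, C)

        offdiag = X[: n // 2, n // 2 :]                   # off-diagonal block of the solution
        s = np.linalg.svd(offdiag, compute_uv=False)
        print("normalized singular values of the off-diagonal block:")
        print(s[:10] / s[0])
        print("numerical rank at tolerance 1e-8:", int(np.sum(s > 1e-8 * s[0])))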

    Low-rank updates and a divide-and-conquer method for linear matrix equations

    Linear matrix equations, such as the Sylvester and Lyapunov equations, play an important role in various applications, including the stability analysis and dimensionality reduction of linear dynamical control systems and the solution of partial differential equations. In this work, we present and analyze a new algorithm, based on tensorized Krylov subspaces, for quickly updating the solution of such a matrix equation when its coefficients undergo low-rank changes. We demonstrate how our algorithm can be utilized to accelerate the Newton method for solving continuous-time algebraic Riccati equations. Our algorithm also forms the basis of a new divide-and-conquer approach for linear matrix equations with coefficients that feature hierarchical low-rank structure, such as HODLR, HSS, and banded matrices. Numerical experiments demonstrate the advantages of divide-and-conquer over existing approaches, in terms of computational time and memory consumption.
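
    A dense verification of the identity behind low-rank updating (not the paper's divide-and-conquer algorithm or its tensorized Krylov solver): if $A X_0 + X_0 B = C$ and $A$ is replaced by $A + uv^T$, the correction satisfies a Sylvester equation with the updated coefficient and a right-hand side of rank at most one. All matrices below are illustrative random data.

        import numpy as np
        from scipy.linalg import solve_sylvester

        # If A X0 + X0 B = C and A becomes A + u v^T, then X = X0 + dX where the
        # correction dX solves (A + u v^T) dX + dX B = -u (v^T X0), whose
        # right-hand side has rank at most 1.

        rng = np.random.default_rng(0)
        n = 120
        A = rng.standard_normal((n, n)) + 5 * n * np.eye(n)    # shifts keep spectra separated
        B = rng.standard_normal((n, n)) + 5 * n * np.eye(n)
        C = rng.standard_normal((n, n))
        u = rng.standard_normal((n, 1))
        v = rng.standard_normal((n, 1))

        X0 = solve_sylvester(A, B, C)                       # original solution
        A_new = A + u @ v.T                                 # rank-1 change of A

        dX = solve_sylvester(A_new, B, -u @ (v.T @ X0))     # low-rank right-hand side
        X_updated = X0 + dX

        X_direct = solve_sylvester(A_new, B, C)             # reference: solve from scratch
        print(np.linalg.norm(X_updated - X_direct) / np.linalg.norm(X_direct))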

    On maximum volume submatrices and cross approximation for symmetric semidefinite and diagonally dominant matrices

    The problem of finding a $k \times k$ submatrix of maximum volume of a matrix $A$ is of interest in a variety of applications. For example, it yields a quasi-best low-rank approximation constructed from the rows and columns of $A$. We show that such a submatrix can always be chosen to be a principal submatrix if $A$ is symmetric semidefinite or diagonally dominant. Then we analyze the low-rank approximation error returned by a greedy method for volume maximization, cross approximation with complete pivoting. Our bound for general matrices extends an existing result for symmetric semidefinite matrices and yields new error estimates for diagonally dominant matrices. In particular, for doubly diagonally dominant matrices the error is shown to remain within a modest factor of the best approximation error. We also illustrate how the application of our results to cross approximation for functions leads to new and better convergence results.
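
    A direct sketch of the greedy method analyzed above, cross approximation with complete pivoting, applied to a symmetric positive semidefinite test matrix with a prescribed spectrum; the function name and test data are mine, not from the paper.

        import numpy as np

        def cross_approximation(A, k):
            """Greedy cross approximation with complete pivoting: at each step pick
            the entry of largest modulus in the residual and subtract the
            corresponding rank-1 cross."""
            R = np.array(A, dtype=float, copy=True)
            approx = np.zeros_like(R)
            for _ in range(k):
                i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
                if R[i, j] == 0:
                    break
                cross = np.outer(R[:, j], R[i, :]) / R[i, j]
                approx += cross
                R -= cross
            return approx

        # Symmetric positive semidefinite test matrix with geometrically decaying spectrum.
        rng = np.random.default_rng(0)
        Q = np.linalg.qr(rng.standard_normal((300, 300)))[0]
        A = Q @ np.diag(2.0 ** -np.arange(300)) @ Q.T

        for k in (2, 5, 10):
            err = np.linalg.norm(A - cross_approximation(A, k), 2)
            best = 2.0 ** -k                       # best rank-k error = (k+1)-st singular value
            print(f"k={k:2d}  cross error={err:.2e}  best error={best:.2e}")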

    On functions of quasi-Toeplitz matrices

    Let $a(z)=\sum_{i\in\mathbb Z}a_i z^i$ be a complex-valued continuous function, defined for $|z|=1$, such that $\sum_{i=-\infty}^{+\infty}|i a_i|<\infty$. Consider the semi-infinite Toeplitz matrix $T(a)=(t_{i,j})_{i,j\in\mathbb Z^+}$ associated with the symbol $a(z)$, such that $t_{i,j}=a_{j-i}$. A quasi-Toeplitz matrix associated with the continuous symbol $a(z)$ is a matrix of the form $A=T(a)+E$, where $E=(e_{i,j})$ with $\sum_{i,j\in\mathbb Z^+}|e_{i,j}|<\infty$, and is called a CQT-matrix. Given a function $f(x)$ and a CQT matrix $M$, we provide conditions under which $f(M)$ is well defined and is a CQT matrix. Moreover, we introduce a parametrization of CQT matrices and algorithms for the computation of $f(M)$. We treat the case where $f(x)$ is assigned in terms of a power series and the case where $f(x)$ is defined in terms of a Cauchy integral. This analysis is applied also to finite matrices which can be written as the sum of a Toeplitz matrix and a low-rank correction.
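
    A finite-dimensional numerical check related to the last sentence above, not the CQT parametrization or arithmetic of the paper: compute the exponential of a banded Toeplitz matrix plus a small corner correction and observe that, away from the corners, the result is again nearly constant along diagonals, i.e. close to Toeplitz. Symbol, correction, and sizes are illustrative assumptions.

        import numpy as np
        from scipy.linalg import expm, toeplitz

        # For M = T(a) + E with a banded Toeplitz part and a correction supported
        # near the top-left corner, f(M) = exp(M) is close to a Toeplitz matrix
        # plus a correction concentrated near the corners.

        n = 300
        col = np.zeros(n); col[0], col[1] = -2.0, 1.0       # symbol a(z) = z^{-1} - 2 + z
        T = toeplitz(col, col)                              # symmetric banded Toeplitz part
        E = np.zeros((n, n)); E[:3, :3] = 0.5               # small correction in the corner
        M = T + E

        F = expm(M)

        # Interior diagonals of exp(M) are nearly constant (Toeplitz-like):
        for k in (0, 1, 2):
            d = np.diag(F, k)[n // 3 : 2 * n // 3]          # stay away from the corners
            print(f"diagonal {k}: spread = {d.max() - d.min():.2e}, mean value = {d.mean():.4f}")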

    Efficient cyclic reduction for QBDs with rank structured blocks

    We provide effective algorithms for solving block tridiagonal block Toeplitz systems with $m\times m$ quasiseparable blocks, as well as quadratic matrix equations with $m\times m$ quasiseparable coefficients, based on cyclic reduction and on the technology of rank-structured matrices. The algorithms rely on the exponential decay of the singular values of the off-diagonal submatrices generated by cyclic reduction. We provide a formal proof of this decay in the Markovian framework. The results of the numerical experiments that we report confirm a significant speed up over the general algorithms, already starting with the moderately small size $m\approx 10^2$.
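
    As a point of reference only, and plainly not cyclic reduction: the classical fixed-point ("natural") iteration for the minimal nonnegative solution $G$ of the discrete-time QBD equation $G = A_0 + A_1 G + A_2 G^2$, run on a small randomly generated transition triple with a downward drift. Structured solvers such as the one in the paper replace this slowly convergent baseline.

        import numpy as np

        # Classical fixed-point iteration for the QBD equation G = A0 + A1 G + A2 G^2,
        # where A0, A1, A2 hold transition probabilities one level down, within the
        # level, and one level up. Blocks are illustrative random data.

        rng = np.random.default_rng(0)
        m = 20
        A0, A1, A2 = 2 * rng.random((m, m)), rng.random((m, m)), 0.5 * rng.random((m, m))
        S = (A0 + A1 + A2).sum(axis=1, keepdims=True)
        A0, A1, A2 = A0 / S, A1 / S, A2 / S            # rows of A0 + A1 + A2 sum to 1

        G = np.zeros((m, m))
        for it in range(5000):
            G_new = A0 + A1 @ G + A2 @ (G @ G)
            if np.max(np.abs(G_new - G)) < 1e-13:
                break
            G = G_new

        residual = np.max(np.abs(G - (A0 + A1 @ G + A2 @ (G @ G))))
        print("iterations:", it, " residual:", residual, " min row sum of G:", G.sum(axis=1).min())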

    A nested divide-and-conquer method for tensor Sylvester equations with positive definite hierarchically semiseparable coefficients

    Linear systems with a tensor product structure arise naturally when considering the discretization of Laplace-type differential equations or, more generally, multidimensional operators with separable coefficients. In this work, we focus on the numerical solution of linear systems of the form $\left(I\otimes \dots\otimes I \otimes A_1+\dots + A_d\otimes I \otimes\dots \otimes I\right)x=b$, where the matrices $A_t\in\mathbb R^{n\times n}$ are symmetric positive definite and belong to the class of hierarchically semiseparable matrices. We propose and analyze a nested divide-and-conquer scheme, based on the technology of low-rank updates, that attains the quasi-optimal computational cost $\mathcal O(n^d (\log(n) + \log(\kappa)^2 + \log(\kappa) \log(\epsilon^{-1})))$, where $\kappa$ is the condition number of the linear system and $\epsilon$ the target accuracy. Our theoretical analysis highlights the role of inexactness in the nested calls of our algorithm and provides worst-case estimates for the amplification of the residual norm. The performance is validated on 2D and 3D case studies.
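
    A sketch of the $d=2$ instance only, not the nested divide-and-conquer scheme for HSS coefficients: with column-major vectorization, $(I\otimes A_1 + A_2\otimes I)\,\mathrm{vec}(X)=\mathrm{vec}(B)$ is equivalent to the Sylvester equation $A_1 X + X A_2^T = B$, which the code verifies on small dense SPD matrices (illustrative data).

        import numpy as np
        from scipy.linalg import solve_sylvester

        # d = 2 case: (I (x) A1 + A2 (x) I) vec(X) = vec(B)  <=>  A1 X + X A2^T = B,
        # with column-major vec. Verified against the explicit Kronecker system.

        rng = np.random.default_rng(0)
        n = 50

        def spd(n):
            W = rng.standard_normal((n, n))
            return W @ W.T + n * np.eye(n)

        A1, A2 = spd(n), spd(n)
        B = rng.standard_normal((n, n))

        # Explicit Kronecker-structured solve (only feasible for tiny n).
        K = np.kron(np.eye(n), A1) + np.kron(A2, np.eye(n))
        x = np.linalg.solve(K, B.flatten(order="F"))

        # Equivalent Sylvester solve.
        X = solve_sylvester(A1, A2.T, B)

        print(np.linalg.norm(x - X.flatten(order="F")) / np.linalg.norm(x))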

    Optimizing network robustness via Krylov subspaces

    We consider the problem of attaining either the maximal increase or the maximal reduction of the robustness of a complex network by means of a bounded modification of a subset of the edge weights. We propose two novel strategies combining Krylov subspace approximations with a greedy scheme and an interior point method employing either the Hessian or its approximation computed via the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm (L-BFGS). The paper discusses the computational and modeling aspects of our methodology and illustrates the various optimization problems on networks that can be addressed within the proposed framework. Finally, in the numerical experiments we compare the performance of our algorithms with state-of-the-art techniques on synthetic and real-world networks.
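
    A greedy sketch under an explicit assumption: the abstract does not fix the robustness measure, so here it is taken to be the Estrada-index-type quantity $\mathrm{tr}(\exp(A))$. For a symmetric adjacency matrix, the derivative of $\mathrm{tr}(\exp(A))$ with respect to the weight of edge $(i,j)$ is $2[\exp(A)]_{ij}$, so each greedy step raises the weight of the existing edge with the largest such entry. The test graph, budget, and step size are illustrative; this is not the paper's interior point or L-BFGS machinery.

        import numpy as np
        from scipy.linalg import expm

        # Greedy edge-weight increase of tr(exp(A)) using the gradient entries
        # 2*[exp(A)]_{ij}. expm is applied densely here; at scale a Krylov
        # approximation (e.g. scipy.sparse.linalg.expm_multiply) would take its place.

        rng = np.random.default_rng(0)
        n = 60
        A = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)
        A = A + A.T                                   # random undirected test graph
        edges = np.array(np.nonzero(np.triu(A, 1))).T

        print("initial robustness tr(exp(A)):", np.trace(expm(A)))

        budget, step = 5, 0.5
        for _ in range(budget):
            E = expm(A)
            vals = E[edges[:, 0], edges[:, 1]]        # gradient entries on existing edges
            i, j = edges[np.argmax(vals)]
            A[i, j] += step
            A[j, i] += step

        print("robustness after greedy updates:   ", np.trace(expm(A)))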